ACES: Translation Accuracy Challenge Sets for Evaluating Machine Translation Metrics
As machine translation (MT) metrics improve their correlation with human
judgement every year, it is crucial to understand the limitations of such
metrics at the segment level. Specifically, it is important to investigate
metric behaviour when facing accuracy errors in MT because these can have
dangerous consequences in certain contexts (e.g., legal, medical). We curate
ACES, a translation accuracy challenge set, consisting of 68 phenomena ranging
from simple perturbations at the word/character level to more complex errors
based on discourse and real-world knowledge. We use ACES to evaluate a wide
range of MT metrics including the submissions to the WMT 2022 metrics shared
task and perform several analyses leading to general recommendations for metric
developers. We recommend: a) combining metrics with different strengths, b)
developing metrics that give more weight to the source and less to
surface-level overlap with the reference, and c) explicitly modelling additional
language-specific information beyond what is available via multilingual
embeddings.
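To make the evaluation protocol concrete, here is a minimal sketch (in Python, with illustrative names; this is not the ACES codebase) of the contrastive scoring that such challenge sets enable: a metric is credited whenever it scores the good translation above the incorrect one.

```python
# Minimal sketch of contrastive challenge-set scoring, in the spirit of ACES.
# ChallengeExample and score_fn are illustrative names, not the ACES API.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ChallengeExample:
    source: str
    reference: str
    good_translation: str        # acceptable MT output
    incorrect_translation: str   # output containing the accuracy error
    phenomenon: str              # e.g. "hallucination", "discourse"

def kendall_tau_like(examples: List[ChallengeExample],
                     score_fn: Callable[[str, str, str], float]) -> Dict[str, float]:
    """Per phenomenon, compute (concordant - discordant) / total, where a pair
    is concordant if the metric scores the good translation strictly above the
    incorrect one (ties count as discordant here)."""
    concordant: Dict[str, int] = {}
    totals: Dict[str, int] = {}
    for ex in examples:
        good = score_fn(ex.source, ex.good_translation, ex.reference)
        bad = score_fn(ex.source, ex.incorrect_translation, ex.reference)
        totals[ex.phenomenon] = totals.get(ex.phenomenon, 0) + 1
        concordant[ex.phenomenon] = concordant.get(ex.phenomenon, 0) + (good > bad)
    return {ph: (2 * concordant[ph] - totals[ph]) / totals[ph] for ph in totals}

# Toy surface-overlap "metric", purely for illustration:
toy = lambda src, hyp, ref: len(set(hyp.split()) & set(ref.split()))
```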
ACES: Translation Accuracy Challenge Sets at WMT 2023
We benchmark the performance of segment-level metrics submitted to WMT 2023
using the ACES Challenge Set (Amrhein et al., 2022). The challenge set consists
of 36K examples representing challenges from 68 phenomena and covering 146
language pairs. The phenomena range from simple perturbations at the
word/character level to more complex errors based on discourse and real-world
knowledge. For each metric, we provide a detailed profile of performance over a
range of error categories as well as an overall ACES-Score for quick
comparison. We also measure the incremental performance of the metrics
submitted to both WMT 2023 and 2022. We find that 1) there is no clear winner
among the metrics submitted to WMT 2023, and 2) performance change between the
2023 and 2022 versions of the metrics is highly variable. Our recommendations
are similar to those from WMT 2022. Metric developers should focus on: building
ensembles of metrics from different design families, developing metrics that
pay more attention to the source and rely less on surface-level overlap, and
carefully determining the influence of multilingual embeddings on MT
evaluation.
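The headline ACES-Score is obtained by aggregating the per-category scores. A sketch of that aggregation follows; note that the published ACES-Score weights its top-level error categories by severity, so the category names and weights below are placeholders, not the official definition.

```python
# Sketch of collapsing per-category scores into one headline number.
# Categories and weights are illustrative placeholders only.
from typing import Dict

def aces_style_score(tau_by_category: Dict[str, float],
                     weights: Dict[str, float]) -> float:
    """Weighted sum of per-category Kendall-tau-like scores."""
    return sum(weights[c] * tau_by_category[c] for c in weights)

print(aces_style_score(
    {"addition": 0.4, "omission": 0.1, "mistranslation": -0.2},
    {"addition": 1.0, "omission": 1.0, "mistranslation": 5.0},
))
```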
Incorporating pronoun function into statistical machine translation
Pronouns are used frequently in language and perform a range of functions.
Some pronouns are used to express coreference, and others are not. Languages
and genres differ in how and when they use pronouns, and this poses a problem
for Statistical Machine Translation (SMT) systems (Le Nagard and Koehn,
2010; Hardmeier and Federico, 2010; Novák, 2011; Guillou, 2012; Weiner, 2014;
Hardmeier, 2014). Attention to date has focussed on coreferential (anaphoric)
pronouns with NP antecedents, which, when translated from English into a language
with grammatical gender, must agree with the translation of the head of
the antecedent. Despite growing attention to this problem, little progress has
been made, and little attention has been given to other pronouns.
The central claim of this thesis is that pronouns performing different functions
in text should be handled differently by SMT systems and when evaluating
pronoun translation. This motivates the introduction of a new framework to
categorise pronouns according to their function: anaphoric/cataphoric reference,
event reference, extra-textual reference, pleonastic, addressee reference, speaker
reference, generic reference, or other function. Labelling pronouns according to
their function also helps to resolve instances of functional ambiguity arising from
the same pronoun in the source language having multiple functions, each with different
translation requirements in the target language. The categorisation framework
is used in corpus annotation, corpus analysis, SMT system development and
evaluation.
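As an illustration of what such function annotations might look like in machine-readable form, here is a minimal sketch; the class and field names are my own, not the ParCor schema.

```python
# A minimal encoding of the pronoun function categories described above.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class PronounFunction(Enum):
    ANAPHORIC = auto()      # refers back to an NP antecedent
    CATAPHORIC = auto()     # refers forward to a later mention
    EVENT = auto()          # refers to a verb phrase, clause, or event
    EXTRA_TEXTUAL = auto()  # refers to something outside the text
    PLEONASTIC = auto()     # non-referential, e.g. "It is raining"
    ADDRESSEE = auto()      # addresses the reader/hearer
    SPEAKER = auto()        # refers to the speaker(s)
    GENERIC = auto()        # generic reference, e.g. "you never know"
    OTHER = auto()

@dataclass
class PronounAnnotation:
    token: str                   # surface form, e.g. "it"
    sentence_id: int
    token_id: int
    function: PronounFunction
    antecedent_head: Optional[str] = None  # head of NP antecedent, if anaphoric
```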
I have directed the annotation and conducted analyses of a parallel corpus of
English-German texts called ParCor (Guillou et al., 2014), in which pronouns
are manually annotated according to their function. This provides a first step
toward understanding the problems that SMT systems face when translating pronouns.
In the thesis, I show how analysis of manual translation can prove useful in
identifying and understanding systematic differences in pronoun use between two
languages and can help inform the design of SMT systems. In particular, the analysis
revealed that the German translations in ParCor contain more anaphoric and
pleonastic pronouns than their English originals, reflecting differences in pronoun
use. This raises a particular problem for the evaluation of pronoun translation.
Automatic evaluation methods that rely on reference translations to assess pronoun
translation will not be able to provide an adequate evaluation when the
reference translation departs from the original source-language text. I also show
how analysis of the output of state-of-the-art SMT systems can reveal how well
current systems perform in translating different types of pronouns and indicate
where future efforts would be best directed. The analysis revealed that biases
in the training data, for example arising from the use of "it" and "es" as both
anaphoric and pleonastic pronouns in English and German respectively, are a problem
that SMT systems must overcome. SMT systems also need to disambiguate the
function of those pronouns with ambiguous surface forms so that each pronoun
may be translated in an appropriate way.
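As a toy illustration of that disambiguation step, the following heuristic flags a few pleonastic uses of "it". Real systems rely on trained classifiers or coreference resolution; this regex sketch only catches some extraposition and weather patterns and is certain to miss cases.

```python
# Toy heuristic: decide whether an English "it" looks pleonastic.
import re

PLEONASTIC_PATTERNS = [
    r"\bit\s+(is|was|seems|appears)\s+\w+\s+(that|to)\b",   # "it is clear that ..."
    r"\bit\s+(is|was)\s+(raining|snowing|sunny|late|early)\b",  # weather/time
]

def looks_pleonastic(sentence: str) -> bool:
    s = sentence.lower()
    return any(re.search(p, s) for p in PLEONASTIC_PATTERNS)

print(looks_pleonastic("It is clear that the model fails."))   # True
print(looks_pleonastic("The cat slept because it was tired.")) # False
```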
To demonstrate the value of this work, I have developed an automated post-editing
system in which automated tools are used to construct ParCor-style annotations
over the source-language pronouns. The annotations are then used to resolve
functional ambiguity for the pronoun "it" with separate rules applied to the
output of a baseline SMT system for anaphoric vs. non-anaphoric instances. The
system was submitted to the DiscoMT 2015 shared task on pronoun translation
for English-French. As with all other participating systems, the automatic post-editing
system failed to beat a simple phrase-based baseline. A detailed analysis,
including an oracle experiment in which manual annotation replaces the automated
tools, was conducted to discover the causes of poor system performance.
The analysis revealed that the design of the rules and their strict application to
the SMT output are the biggest factors in the failure of the system.
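A caricature of that rule stage is sketched below (hypothetical names, deliberately simplified): given the function label of a source "it" and, for anaphoric cases, the gender of the translated antecedent head, a French pronoun is substituted into the SMT output. This rigidity is precisely what the analysis identifies as a cause of failure.

```python
# Simplified sketch of rule-based pronoun post-editing for English-French.
from typing import Optional

def postedit_pronoun(function: str, antecedent_gender: Optional[str] = None) -> str:
    if function == "anaphoric":
        # agree with the translation of the antecedent's head noun
        return {"masc": "il", "fem": "elle"}.get(antecedent_gender, "il")
    # non-anaphoric instances (pleonastic, event reference, ...) get a
    # non-gendered form
    return "cela" if function == "event" else "ce"

print(postedit_pronoun("anaphoric", "fem"))  # elle
print(postedit_pronoun("pleonastic"))        # ce
```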
The lack of automatic evaluation metrics for pronoun translation is a limiting
factor in SMT system development. To alleviate this problem, Christian Hardmeier
and I have developed a testing regimen called PROTEST comprising (1)
a hand-selected set of pronoun tokens categorised according to the different problems
that SMT systems face and (2) an automated evaluation script. Pronoun
translations can then be automatically compared against a reference translation,
with mismatches referred for manual evaluation. The automatic evaluation was
applied to the output of systems submitted to the DiscoMT 2015 shared task
on pronoun translation. This again highlighted the weakness of the post-editing
system, which performs poorly due to its focus on producing gendered pronoun
translations, and its inability to distinguish between pleonastic and event reference
pronouns.
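The semi-automatic loop can be sketched as follows; the data structures are illustrative, not the actual PROTEST script's format.

```python
# Sketch of a PROTEST-style semi-automatic evaluation: accept automatic
# matches against the reference, refer mismatches to a human judge rather
# than counting them as errors.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PronounCase:
    case_id: int
    category: str           # e.g. "anaphoric it, NP antecedent"
    system_pronoun: str     # pronoun produced by the MT system
    reference_pronoun: str  # pronoun in the reference translation

def protest_style_eval(cases: List[PronounCase]) -> Tuple[int, List[PronounCase]]:
    """Return (automatically accepted count, cases needing manual judgement)."""
    accepted, manual = 0, []
    for c in cases:
        if c.system_pronoun.lower() == c.reference_pronoun.lower():
            accepted += 1      # match: assume correct
        else:
            manual.append(c)   # mismatch: not necessarily wrong; ask a human
    return accepted, manual
```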
Automatic Reference-Based Evaluation of Pronoun Translation Misses the Point
We compare the performance of the APT and AutoPRF metrics for pronoun
translation against a manually annotated dataset comprising human judgements as
to the correctness of translations of the PROTEST test suite. Although there is
some correlation with the human judgements, a range of issues limit the
performance of the automated metrics. Instead, we recommend the use of
semi-automatic metrics and test suites in place of fully automatic metrics.